Blind watermarking provides powerful evidence for copyright protection, image authentication, and tampering identification. However, it remains a challenge to design a watermarking model with high imperceptibility and robustness against strong noise attacks. To resolve this issue, we present a framework Combining Invertible and Non-invertible (CIN) mechanisms. CIN is composed of an invertible part that achieves high imperceptibility and a non-invertible part that strengthens robustness against strong noise attacks. For the invertible part, we develop a diffusion and extraction module (DEM) and a fusion and split module (FSM) to embed and extract watermarks symmetrically in an invertible way. For the non-invertible part, we introduce a non-invertible attention-based module (NIAM) and a noise-specific selection module (NSM) to handle asymmetric extraction under strong noise attacks. Extensive experiments demonstrate that our framework significantly outperforms current state-of-the-art methods in both imperceptibility and robustness. Our framework achieves an average of 99.99% accuracy and 67.66 dB PSNR under noise-free conditions, and 96.64% accuracy with 39.28 dB PSNR under combined strong noise attacks. The code will be available at https://github.com/rmpku/CIN.
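The split between the two parts is easiest to see in a toy sketch of the invertible half. The snippet below is a minimal illustration of the coupling idea only, assuming simple convolutional stand-ins for the DEM/FSM sub-networks (the names `phi`/`psi` and all shapes are hypothetical, not the authors' architecture): the same layer embeds the watermark in its forward pass and extracts it exactly in its inverse pass, while robustness to noise is left to the non-invertible part, which is omitted here.

```python
import torch
import torch.nn as nn

class InvertibleCoupling(nn.Module):
    """Two-branch coupling: the forward pass embeds the watermark into the cover
    image, and the analytic inverse extracts it back without loss."""
    def __init__(self, channels):
        super().__init__()
        # Simple conv layers standing in for the paper's DEM/FSM sub-networks.
        self.phi = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.psi = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, cover, wm):
        stego = cover + self.phi(wm)   # embed watermark features into the cover branch
        aux = wm + self.psi(stego)     # auxiliary branch carried alongside the stego image
        return stego, aux

    def inverse(self, stego, aux):
        wm = aux - self.psi(stego)     # symmetric extraction: exact inverse of forward
        cover = stego - self.phi(wm)
        return cover, wm

if __name__ == "__main__":
    layer = InvertibleCoupling(channels=3)
    cover = torch.rand(1, 3, 64, 64)
    wm = torch.rand(1, 3, 64, 64)                   # watermark already diffused to image shape
    stego, aux = layer(cover, wm)                   # embedding
    cover_rec, wm_rec = layer.inverse(stego, aux)   # extraction
    print(torch.allclose(wm, wm_rec, atol=1e-5))    # True: invertibility guarantees exact recovery
```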
Inductive reasoning is a core component of human intelligence. In past research on inductive reasoning within computer science, logic language has been used to represent knowledge (specifically, facts and rules). However, logic language can cause systematic problems for inductive reasoning, such as the inability to handle raw input like natural language, sensitivity to mislabeled data, and incapacity to handle ambiguous input. To this end, we propose a new task, inducing natural language rules from natural language facts, and create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language. New automatic metrics are also proposed and analysed for the evaluation of this task. With DEER, we investigate a modern approach to inductive reasoning in which natural language, rather than logic language, serves as the representation of knowledge and pretrained language models act as "reasoners". Moreover, we provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts. We also propose a new framework for this task that draws insights from the philosophy literature and, as shown in the experiment section, surpasses baselines in both automatic and human evaluations.
Recently, a surge of high-quality 3D-aware GANs has emerged, leveraging the generative power of neural rendering. It is natural to pair 3D GANs with GAN inversion methods that project a real image into the generator's latent space, allowing free-view consistent synthesis and editing, referred to as 3D GAN inversion. Even with the facial prior preserved in pre-trained 3D GANs, reconstructing a 3D portrait from only one monocular image remains an ill-posed problem. The straightforward application of 2D GAN inversion methods focuses on texture similarity only while ignoring the correctness of the 3D geometry, which may cause geometry collapse, especially when reconstructing a side face under an extreme pose. Moreover, the synthesized results in novel views are prone to blur. In this work, we propose a novel method to improve 3D GAN inversion by introducing a facial symmetry prior. We design a pipeline and constraints to make full use of the pseudo auxiliary view obtained via image flipping, which helps obtain a robust and reasonable geometry during the inversion process. To enhance texture fidelity in unobserved viewpoints, pseudo labels from depth-guided 3D warping provide extra supervision. We also design constraints to filter out conflicting areas during optimization in asymmetric situations. Comprehensive quantitative and qualitative evaluations on image reconstruction and editing demonstrate the superiority of our method.
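As a rough, hypothetical sketch of how a symmetry prior with conflict filtering can be wired up (the function, tensor shapes, and mask below are illustrative assumptions, not the paper's implementation), one can flip the input to obtain the pseudo auxiliary view and apply a masked photometric loss against the rendering at the mirrored camera pose:

```python
import torch

def masked_symmetry_loss(render_mirror, pseudo_view, conflict_mask):
    """Hypothetical masked L1 constraint between the rendering at the mirrored
    camera pose and the flipped input (pseudo auxiliary view); pixels flagged
    as conflicting (asymmetric regions) are excluded from the optimization."""
    valid = (1.0 - conflict_mask).expand_as(render_mirror)  # 1 where symmetry is assumed to hold
    diff = (render_mirror - pseudo_view).abs()
    return (valid * diff).sum() / valid.sum().clamp(min=1.0)

# Toy usage: the pseudo auxiliary view is simply the horizontal mirror of the input.
image = torch.rand(1, 3, 128, 128)
pseudo_view = torch.flip(image, dims=[-1])                  # flipped input image
render_mirror = torch.rand(1, 3, 128, 128)                  # stand-in for the mirrored-pose render
conflict_mask = (torch.rand(1, 1, 128, 128) > 0.9).float()  # stand-in conflict mask (e.g., hair, occlusion)
loss = masked_symmetry_loss(render_mirror, pseudo_view, conflict_mask)
```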
Real-world text applications often involve composing a wide range of text control operations, such as editing text with respect to an attribute, manipulating keywords and structure, and generating new text with desired attributes. Prior work typically learns or finetunes a language model (LM) to perform individual operations or specific subsets of them. Recent research has studied combining operations in a plug-and-play manner, often with costly search or optimization in the complex sequence space. This paper proposes a new, efficient approach for composable text operations in a compact text latent space. The low dimensionality and differentiability of the text latent vectors allow us to develop an efficient sampler based on ordinary differential equations (ODEs), given arbitrary plug-in operators (e.g., attribute classifiers). By connecting pretrained LMs (e.g., GPT-2) to the latent space through efficient adaptation, we then decode the sampled vectors into the desired text sequences. The flexible approach permits diverse control operators (sentiment, tense, formality, keywords, etc.) acquired with any relevant data from different domains. Experiments show that composing these operators within our approach can generate or edit high-quality text, substantially improving over previous methods in both generation quality and efficiency.
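To make the latent-space sampler concrete, here is a minimal, hypothetical sketch of classifier-guided latent editing with a plain Euler solver; the classifiers, latent dimension, targets, and step sizes are illustrative assumptions rather than the paper's exact energy-based ODE sampler.

```python
import torch
import torch.nn.functional as F

def composed_energy(z, operators):
    """Sum of plug-in energies; each operator is a (classifier, target, weight) triple.
    The classifiers act directly on the low-dimensional latent vector z."""
    total = z.new_zeros(())
    for clf, target, weight in operators:
        total = total + weight * F.cross_entropy(clf(z), target)
    return total

def ode_edit(z0, operators, steps=100, dt=0.05):
    """Euler integration of dz/dt = -grad_z E(z): drift the latent toward a region
    where all attribute classifiers agree with their targets, then decode with the LM."""
    z = z0.clone().requires_grad_(True)
    for _ in range(steps):
        grad, = torch.autograd.grad(composed_energy(z, operators), z)
        z = (z - dt * grad).detach().requires_grad_(True)
    return z.detach()

# Toy usage with two hypothetical attribute classifiers on a 64-d latent space.
latent_dim = 64
z0 = torch.randn(1, latent_dim)
sentiment_clf = torch.nn.Linear(latent_dim, 2)            # stand-in for a trained sentiment classifier
tense_clf = torch.nn.Linear(latent_dim, 3)                # stand-in for a trained tense classifier
operators = [(sentiment_clf, torch.tensor([1]), 1.0),     # target: positive sentiment
             (tense_clf, torch.tensor([0]), 0.5)]         # target: past tense
z_edit = ode_edit(z0, operators)                          # decode z_edit with the adapted LM decoder
```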
To navigate safely in various complex traffic scenarios, autonomous driving systems are generally equipped with a motion prediction module that provides vital information to the downstream planning module. For real-world applications, both the accuracy and the latency of a motion prediction model are essential. In this report, we present an effective and efficient solution that ranked 3rd in the 2022 Argoverse 2 Motion Forecasting Challenge.
By ensuring differential privacy in learning algorithms, one can rigorously mitigate the risk of large models memorizing sensitive training data. In this paper, we study two algorithms for this purpose, DP-SGD and DP-NSGD, which first clip or normalize \textit{per-sample} gradients to bound the sensitivity and then add noise to obfuscate the exact information. We analyze the convergence behavior of these two algorithms in the non-convex optimization setting under two common assumptions, and achieve a rate of $\mathcal{O}\left(\sqrt[4]{\frac{d\log(1/\delta)}{n^2\epsilon^2}}\right)$ for a $d$-dimensional model, $n$ samples, and $(\epsilon,\delta)$-DP, which improves over previous bounds under weaker assumptions. Specifically, we introduce a regularizing factor in DP-NSGD and show that it is crucial to the convergence proof and subtly controls the bias and noise trade-off. Our proofs deliberately handle the per-sample gradient clipping and normalization that are specific to the private setting. Empirically, we demonstrate that the two algorithms achieve similar best accuracy, while DP-NSGD is easier to tune than DP-SGD and may therefore help further save the privacy budget when the tuning effort is accounted for.
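A minimal sketch of the two update rules may help, assuming per-sample gradients are already materialized as a matrix; the noise scales and the sensitivity bound in the comments follow the standard Gaussian-mechanism recipe and are illustrative, not the paper's exact calibration.

```python
import torch

def dp_sgd_update(per_sample_grads, clip_norm, sigma):
    """DP-SGD step: clip each per-sample gradient to L2 norm <= clip_norm,
    average, and add Gaussian noise scaled to the clipping threshold."""
    n = per_sample_grads.shape[0]
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    clipped = per_sample_grads * torch.clamp(clip_norm / (norms + 1e-12), max=1.0)
    noise = torch.randn(per_sample_grads.shape[1]) * sigma * clip_norm / n
    return clipped.mean(dim=0) + noise

def dp_nsgd_update(per_sample_grads, reg, sigma):
    """DP-NSGD step: normalize each per-sample gradient by (||g|| + reg); the
    regularizing factor reg trades bias (large reg) against noise (small reg).
    Each normalized gradient has norm < 1, so the per-sample sensitivity is bounded by 1."""
    n = per_sample_grads.shape[0]
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    normalized = per_sample_grads / (norms + reg)
    noise = torch.randn(per_sample_grads.shape[1]) * sigma / n
    return normalized.mean(dim=0) + noise

# Toy usage on a batch of 32 per-sample gradients for a 10-dimensional model.
g = torch.randn(32, 10)
step_sgd = dp_sgd_update(g, clip_norm=1.0, sigma=1.1)
step_nsgd = dp_nsgd_update(g, reg=0.01, sigma=1.1)
```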
The area under the ROC curve (AUROC) has been vigorously applied to imbalanced classification, in combination with deep learning techniques. However, no existing work provides sound guidance for peers to choose appropriate deep AUROC maximization techniques. In this work, we fill this gap from three aspects. (i) We benchmark a variety of loss functions with different algorithmic choices for the deep AUROC optimization problem. We study two classes of loss functions: pairwise losses and composite losses, covering 10 loss functions in total. Interestingly, we find that composite losses, as an innovative class of loss functions, show more competitive performance than pairwise losses from both the training convergence and test generalization perspectives. Nevertheless, data with more corrupted labels favor pairwise symmetric losses. (ii) Moreover, we benchmark and highlight essential algorithmic choices such as the positive sampling rate, regularization, normalization/activation, and optimizers. Key findings include: a higher positive sampling rate is likely beneficial for deep AUROC maximization; different datasets favor different regularization weights; and appropriate normalization techniques, such as sigmoid and $\ell_2$ score normalization, can improve model performance. (iii) For the optimization aspect, we benchmark SGD-type, momentum-type, and Adam-type optimizers for both pairwise and composite losses. Our findings show that although Adam-type methods are more competitive from the training perspective, they do not outperform the others from the testing perspective.
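As a concrete, hypothetical instance of the pairwise family benchmarked here (a squared-hinge surrogate with sigmoid score normalization; the loss and names are illustrative, not the benchmark's exact implementations):

```python
import torch

def pairwise_auc_loss(pos_scores, neg_scores, margin=1.0):
    """Pairwise squared-hinge AUROC surrogate: penalize every positive/negative
    pair whose score gap falls below the margin."""
    gaps = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)   # [P, N] pairwise score gaps
    return torch.clamp(margin - gaps, min=0).pow(2).mean()

# Toy usage with sigmoid score normalization, one of the choices discussed above.
pos_logits, neg_logits = torch.randn(8), torch.randn(64)
loss = pairwise_auc_loss(torch.sigmoid(pos_logits), torch.sigmoid(neg_logits))
```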
In this paper, we propose systematic and efficient gradient-based methods for both one-way and two-way partial AUC (pAUC) maximization that are applicable to deep learning. We propose new formulations of pAUC surrogate objectives by using distributionally robust optimization (DRO) to define the loss for each individual positive example. We consider two formulations of DRO, one based on conditional value-at-risk (CVaR), which yields a non-smooth but exact estimator of pAUC, and the other based on KL-divergence-regularized DRO, which yields an inexact but smooth (soft) estimator of pAUC. For both one-way and two-way pAUC maximization, we propose two algorithms and prove their convergence for optimizing the two formulations, respectively. Experiments demonstrate the effectiveness of the proposed algorithms for pAUC maximization in deep learning on various datasets.
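Since the CVaR over the negatives at level alpha is just the average of the largest alpha-fraction of the pairwise losses, a toy version of the exact (non-smooth) one-way estimator can be sketched as follows; the squared-hinge pairwise loss and all names are illustrative assumptions, not the paper's precise objective.

```python
import torch

def cvar_pauc_loss(pos_scores, neg_scores, alpha=0.1, margin=1.0):
    """CVaR-style surrogate for one-way partial AUC: for each positive example,
    keep only the largest alpha-fraction of pairwise losses over the negatives
    (the hardest negatives, i.e., those in the targeted false-positive range)."""
    gaps = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)   # [P, N] pairwise score gaps
    pair_loss = torch.clamp(margin - gaps, min=0).pow(2)       # squared-hinge pairwise loss
    k = max(1, int(alpha * neg_scores.numel()))
    hardest, _ = pair_loss.topk(k, dim=1)                      # top-k losses per positive = CVaR
    return hardest.mean()

# Toy usage targeting the FPR range [0, 0.1].
loss = cvar_pauc_loss(torch.rand(8), torch.rand(64), alpha=0.1)
```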
Achieving general language intelligence is a longstanding goal of natural language processing, in which standard evaluation benchmarks play a fundamental and guiding role. We argue that for general language intelligence evaluation, the benchmark itself needs to be comprehensive and systematic. To this end, we propose CUGE, a Chinese language understanding and generation evaluation benchmark with the following features: (1) a hierarchical benchmark framework, in which datasets are principally selected and organized into a language capability - task - dataset hierarchy; (2) a multi-level scoring strategy, in which different levels of model performance are reported based on the hierarchical framework. To facilitate CUGE, we provide a public leaderboard that can be customized to support flexible model judging criteria. Evaluation results on representative pretrained language models indicate ample room for improvement toward general language intelligence. CUGE is publicly available at cuge.baai.ac.cn.
Reconstruction of human clothing is an important task that often relies on intrinsic image decomposition. Owing to the lack of domain-specific data and coarse evaluation metrics, existing models cannot produce results that satisfy graphics applications. In this paper, we focus on intrinsic image decomposition for clothing images and make comprehensive improvements. We collect a clothing intrinsic image dataset that includes a synthetic training set and a real-world test set. A more interpretable edge-aware metric and an annotation scheme are designed for the test set, which allows diagnostic evaluation of intrinsic models. Finally, we propose a clothing intrinsic model with carefully designed loss terms and an adversarial module. It exploits easy-to-acquire labels to learn real-world shading, significantly improving performance with only slight extra annotation effort. We show that our proposed model significantly reduces texture-copying artifacts while retaining surprisingly fine details, outperforming existing state-of-the-art methods.